Humane Intelligence
Ask What Your Country Can Do For You: Towards a Public Red Teaming Model
Wm. Matthew Kennedy, Cigdem Patlak, Jayraj Dave, Blake Chambers, Aayush Dhanotiya, Darshini Ramiah, Reva Schwartz, Jack Hagen, Akash Kundu, Mouni Pendharkar, Liam Baisley, Theodora Skeadas, Rumman Chowdhury
AI systems have the potential to produce both benefits and harms, but without rigorous and ongoing adversarial evaluation, AI actors will struggle to assess the breadth and magnitude of the AI risk surface. Researchers from the field of systems design have developed several effective sociotechnical AI evaluation and red-teaming techniques targeting bias, hate speech, mis/disinformation, and other documented harm classes. However, as increasingly sophisticated AI systems are released into high-stakes sectors (such as education, healthcare, and intelligence-gathering), our current evaluation and monitoring methods are proving less and less capable of delivering effective oversight. To actually deliver responsible AI, and to ensure AI's harms are fully understood and its security vulnerabilities mitigated, pioneering new approaches to close this "responsibility gap" is now more urgent than ever. In this paper, we propose one such approach, the cooperative public AI red-teaming exercise, and discuss early results of its prior pilot implementations. This approach is intertwined with CAMLIS itself: the first in-person public demonstrator exercise was held in conjunction with CAMLIS 2024. We review the operational design and results of this exercise, the prior pilot exercise run under the National Institute of Standards and Technology's (NIST) Assessing Risks and Impacts of AI (ARIA) program, and a similar exercise conducted with the Singapore Infocomm Media Development Authority (IMDA). Ultimately, we argue that this approach both delivers meaningful results and is scalable to many AI-developing jurisdictions.
The US Government Wants You--Yes, You--to Hunt Down Generative AI Flaws
At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This "red-teaming" exercise, which also had support from the US government, took a step toward opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software. The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA.
Can ethical AI surveillance exist? Data scientist Rumman Chowdhury doesn't think so
Rumman Chowdhury, the former director of machine learning ethics, transparency and accountability at Twitter, said at a recent talk that she does not believe ethical artificial intelligence surveillance can exist. "We cannot put lipstick on a pig," the data scientist noted at New York University's School of Social Sciences. "I do not think ethical surveillance can exist." In an interview published Monday in The Guardian, which spotlights that statement, Chowdhury warned that the rise of surveillance capitalism is hugely concerning to her.